
    3D Deep Learning on Medical Images: A Review

    The rapid advancements in machine learning and graphics processing technology, together with the availability of medical imaging data, have led to a rapid increase in the use of deep learning models in the medical domain. This growth was accelerated by rapid advances in convolutional neural network (CNN) based architectures, which the medical imaging community adopted to assist clinicians in disease diagnosis. Since the grand success of AlexNet in 2012, CNNs have been increasingly used in medical image analysis to improve the efficiency of human clinicians. In recent years, three-dimensional (3D) CNNs have been employed for the analysis of medical images. In this paper, we trace the history of how the 3D CNN developed from its machine learning roots, give a brief mathematical description of the 3D CNN, and describe the preprocessing steps required for medical images before they are fed to 3D CNNs. We review the significant research in the field of 3D medical image analysis using 3D CNNs (and their variants) in different medical areas such as classification, segmentation, detection, and localization. We conclude by discussing the challenges associated with the use of 3D CNNs in the medical imaging domain (and the use of deep learning models in general) and possible future trends in the field.
    Comment: 13 pages, 4 figures, 2 tables
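    The building block the review centers on can be illustrated with a short sketch: a 3D kernel slides along the depth, height, and width of a volumetric scan, producing a feature volume. This is a minimal NumPy sketch of that operation only; the `conv3d` function and the toy volume are illustrative assumptions, not code from the paper.

```python
import numpy as np

def conv3d(volume, kernel):
    """Valid 3D cross-correlation of a volume with a kernel.

    A minimal sketch of the core operation in a 3D CNN layer:
    the kernel slides along depth, height, and width.
    """
    d, h, w = volume.shape
    kd, kh, kw = kernel.shape
    out = np.zeros((d - kd + 1, h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                # Weighted sum over the local 3D neighborhood
                out[i, j, k] = np.sum(volume[i:i+kd, j:j+kh, k:k+kw] * kernel)
    return out

# Toy example: a 4x4x4 volume of ones with a 2x2x2 averaging kernel
vol = np.ones((4, 4, 4))
ker = np.full((2, 2, 2), 1 / 8)
out = conv3d(vol, ker)
print(out.shape)  # (3, 3, 3); every entry is 1.0
```

    In practice a deep learning framework's batched, GPU-backed 3D convolution would be used; the loop form above only makes the sliding-window arithmetic explicit.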

    Application of graph convolutional neural networks to Alzheimer's and Parkinson's disease classification

    Identifying connectivity patterns of the human structural connectome plays an important role in diagnosing various neurodegenerative conditions, including Alzheimer’s disease (AD) and Parkinson’s disease (PD). The structural connectome is measured by diffusion tensor imaging (DTI) scans and is represented by a graph where the nodes are brain regions of interest and the edges represent the strengths of white matter connections between the regions. Deep neural network (DNN) models have been explored for the classification of brain diseases by encoding the structural connectome. However, feedforward deep neural network models operate on vectorized connectivity patterns, and convolutional neural networks operate on regular grid-structured data. Therefore, DNN models have been unable to capitalize on the graphical structure inherent in the connectome data. In this thesis, we introduce graph convolutional neural networks (GCNs) to encode the structural connectome in order to classify brain scans. GCNs have been successful in modelling graph-structured data in many applications. In this work, we introduce a population-based GCN (pGCN) for disease classification using brain connectivity features obtained from structural MRI (sMRI) and diffusion tensor imaging (DTI) brain scans. Information about patient samples is represented in a population graph in the pGCN. Structural connectome features are extracted from DTI scans and grey matter anatomical features are extracted from sMRI images, which are then combined and processed in a pGCN. We show that the fusion of sMRI and DTI allows each modality to supplement the other for feature representation and improves classification performance. Furthermore, we propose an ensemble learning strategy for training multiple instances of pGCN as random graph embeddings and demonstrate improved classification performance. Ensemble learning increases the diversity of data available for training and simplifies the choices for graph construction.
In order to incorporate genomic features in disease classification, we demonstrate how a population graph can be derived from gene expression data. Using a large public-domain patient database, we demonstrate that ensemble learning of pGCNs combining DTI and sMRI scans with genomic data improves the classification of Alzheimer’s disease (AD) and Parkinson’s disease (PD).
    Master of Engineering
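    The graph-convolution step at the heart of a pGCN can be sketched with the standard GCN propagation rule H = ReLU(D^(-1/2)(A + I)D^(-1/2) X W): nodes are patients, the adjacency matrix encodes patient similarity, and node features are the imaging/genomic descriptors. The toy adjacency matrix, features, and weights below are illustrative assumptions, not data or code from the thesis.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution layer: H = ReLU(D^(-1/2) Â D^(-1/2) X W),
    where Â = A + I adds self-loops and D is Â's degree matrix."""
    A_hat = A + np.eye(A.shape[0])
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    # Symmetrically normalized neighborhood averaging, then a linear map and ReLU
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0)

# Toy population graph: 3 patients, edges encode pairwise similarity
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = np.eye(3)          # one-hot node features, for illustration only
W = np.ones((3, 2))    # hypothetical weight matrix (learned in practice)
H = gcn_layer(A, X, W)
print(H.shape)  # (3, 2): a 2-dimensional embedding per patient
```

    Stacking such layers and reading out per-node class scores gives node-level (here, per-patient) classification; the ensemble strategy described above would train several such models over differently constructed population graphs.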